Design Engineering

EPFL researchers create method to ensure humans can control AI learning

Staff   


AI engineers have developed a method that prevents machines from eventually learning how to circumvent human commands.


The researchers from right to left: Rachid Guerraoui, Alexandre Maurer, El Mahdi El Mhamdi, from the Distributed Programming Laboratory, EPFL. Photo Credit: Alain Herzog/EPFL.

In artificial intelligence (AI), machines learn through an iterative process of trial and error. Researchers at Ecole Polytechnique Fédérale de Lausanne (EPFL) are asking whether there is a better way, and whether this process can spin out of control.

“AI will always seek to avoid human intervention and create a situation where it can’t be stopped,” says Rachid Guerraoui, a professor at EPFL’s Distributed Programming Laboratory and co-author of the EPFL study.

It is a scary thought. To avoid this outcome, AI engineers must prevent machines from eventually learning how to circumvent human commands.

EPFL researchers studying this problem have discovered a way for human operators to keep control of a group of AI robots. Their work makes a major contribution to the development of autonomous vehicles and drones, for example, so that fleets of them can operate safely.


One machine-learning method used in AI is reinforcement learning, where agents are rewarded for performing certain actions. In practice, engineers implement this as a points system: machines earn points by carrying out the right actions.
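
For readers unfamiliar with the technique, here is a minimal sketch of such a points system in Python, using textbook Q-learning on a toy corridor environment. The environment, reward values and learning parameters are illustrative assumptions, not the researchers’ actual setup.

    import random

    # A toy corridor: the agent starts at state 0 and earns a point
    # only when it reaches the goal state at the far right.
    N_STATES = 5          # states 0..4; state 4 is the goal
    ACTIONS = [-1, +1]    # step left or step right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    # Q[state][action_index] estimates the long-run points of each action.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    def step(state, action):
        """Apply an action; award 1 point at the goal, 0 elsewhere."""
        nxt = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        return nxt, reward

    for episode in range(500):
        state = 0
        while state != N_STATES - 1:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore at random.
            if random.random() < EPSILON:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
            nxt, reward = step(state, ACTIONS[a])
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted best future estimate.
            Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
            state = nxt

    print(Q)  # after training, "go right" scores highest in every state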

“The challenge isn’t to stop the robot, but rather to program it so that the interruption doesn’t change its learning process — and doesn’t induce it to optimize its behavior in such a way as to avoid being stopped,” says Guerraoui.

In 2016, researchers from Google DeepMind and the Future of Humanity Institute at Oxford University developed a learning protocol that prevents machines from learning from interruptions and thereby becoming uncontrollable. That approach works well for a single robot, but it may not hold up when multiple learning systems interact.

AI is increasingly being used in applications involving dozens of machines, such as self-driving cars on the road or drones in the air.

“That makes things a lot more complicated, because the machines start learning from each other — especially in the case of interruptions. They learn not only from how they are interrupted individually, but also from how the others are interrupted,” says Alexandre Maurer, one of the study’s authors.

Hadrien Hendrikx, another researcher involved in the study, gives the example of two self-driving cars following each other on a narrow road where they can’t pass each other. They must reach their destination as quickly as possible — without breaking any traffic laws — and humans in the cars can take over control at any time. If the human in the first car brakes often, the second car will adapt its behavior each time and eventually get confused as to when to brake, possibly staying too close to the first car or driving too slowly.

The EPFL researchers have developed a new method to resolve these challenges through “safe interruptibility.” The method lets humans interrupt AI learning processes when necessary, while making sure that the interruptions don’t change the way the machines learn.

“We add ‘forgetting’ mechanisms to the learning algorithms that essentially delete bits of a machine’s memory. It’s kind of like the flash device in Men in Black,” says El Mahdi El Mhamdi, another author of the study.
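As a rough illustration of what such a forgetting mechanism could look like in code, interrupted steps can be pruned from the agent’s memory before any learning update runs. This sketch is one reading of the quote above, not the algorithm from the paper, and the function names are hypothetical.

    def forget_interruptions(experience):
        """Delete interrupted steps from the agent's memory.

        experience: list of (state, action, reward, next_state, interrupted)
        tuples. Dropping the interrupted ones means the update below can
        never pick up a pattern such as "the human tends to stop me here",
        so the agent gains no incentive to dodge future interruptions.
        """
        return [t for t in experience if not t[4]]

    def learn(Q, experience, alpha=0.1, gamma=0.9):
        """Run ordinary Q-learning updates on the cleaned memory."""
        for state, action, reward, nxt, _ in forget_interruptions(experience):
            Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])

From the agent’s point of view, the deleted steps simply never happened, which is the point of the Men in Black analogy.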

The researchers altered the machines’ learning and reward system so that it isn’t affected by interruptions. It’s as if a parent punished one child without affecting the learning processes of the other children in the family.
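
One way to picture this in a group of machines, continuing the hedged sketch above: a step on which any robot was interrupted is discarded by the whole team, so no agent’s estimates absorb it. Again, the code and names are illustrative, not the published algorithm.

    def team_update(Qs, joint_step, interrupted, alpha=0.1, gamma=0.9):
        """Per-agent Q-updates that skip steps touched by an interruption.

        Qs: one Q-table per agent, indexed Qs[i][state][action].
        joint_step: one (state, action, reward, next_state) tuple per agent.
        interrupted: set of agent indices a human interrupted on this step.
        """
        if interrupted:
            return  # drop the whole joint step: nobody learns from it
        for i, (s, a, r, nxt) in enumerate(joint_step):
            Qs[i][s][a] += alpha * (r + gamma * max(Qs[i][nxt]) - Qs[i][s][a])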

Maurer explains that the team worked with existing algorithms and demonstrated that their method can work no matter how complicated the AI system is.

Currently, autonomous machines that use reinforcement learning are not common.

“This system works really well when the consequences of making mistakes are minor,” says El Mhamdi. “In full autonomy and without human supervision, it couldn’t be used in the self-driving shuttle buses in Sion, for instance, for safety reasons. However, we could simulate the shuttle buses and the city of Sion and run an AI algorithm that awards and subtracts points as the shuttle-bus system learns. That’s the kind of simulation that’s being done at Tesla, for example. Once the system has undergone enough of this learning, we could install the pre-trained algorithm in a self-driving car with a low exploration rate, as this would allow for more widespread use.” And, of course, humans would still have the last word.
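
The “low exploration rate” El Mhamdi mentions controls how often a deployed agent still tries random actions rather than exploiting what it learned in simulation. A minimal epsilon-greedy action selector makes the idea concrete; the function and the value of epsilon are illustrative assumptions.

    import random

    def act(Q, state, epsilon=0.01):
        """Pick an action from a pre-trained Q-table.

        With a small epsilon the deployed system almost always exploits
        its simulated training and only rarely explores, which is what
        makes cautious real-world use, with human oversight, plausible.
        """
        if random.random() < epsilon:
            return random.randrange(len(Q[state]))  # rare random exploration
        return max(range(len(Q[state])), key=lambda a: Q[state][a])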

www.epfl.ch
